Current language models are considered to have sub-human capabilities at natural language tasks like question answering or writing code. However, language models are not trained to perform well at these tasks; they are trained to accurately predict the next token given previous tokens in tokenized text. It is not clear whether language models are better or worse than humans at next-token prediction. To try to answer this question, we performed two distinct experiments to directly compare humans and language models on this front: one measuring top-1 accuracy and the other measuring perplexity. In both experiments, we find humans to be consistently \emph{worse} than even relatively small language models like GPT3-Ada at next-token prediction.
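As a minimal sketch of the two evaluation metrics named above (the helper names and toy data are illustrative, not taken from the paper): top-1 accuracy scores a predictor's single best guess per position, while perplexity exponentiates the mean negative log-probability assigned to the true tokens.

```python
import math

def top1_accuracy(predicted_tokens, actual_tokens):
    """Fraction of positions where the single most likely guess matches the truth."""
    correct = sum(p == a for p, a in zip(predicted_tokens, actual_tokens))
    return correct / len(actual_tokens)

def perplexity(token_probs):
    """exp of the mean negative log-probability assigned to the true next tokens."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# Toy example: 3 of 4 guesses correct; probabilities 2^-1, 2^-2, 2^-2, 2^-3
# give a geometric mean of 2^-2, i.e. a perplexity of 4.
acc = top1_accuracy(["the", "cat", "sat", "on"], ["the", "cat", "sat", "in"])
ppl = perplexity([0.5, 0.25, 0.25, 0.125])
```

A perplexity of 4 reads as the predictor being, on average, as uncertain as a uniform choice among 4 tokens.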
Graph neural networks (GNNs) have been shown to be highly sensitive to the choice of aggregation function. While summing over a node's neighbours can approximate any permutation-invariant function over discrete inputs, Cohen-Karlik et al. [2020] proved there are set-aggregation problems for which summing cannot generalise to unbounded inputs, proposing recurrent neural networks regularised towards permutation-invariance as a more expressive aggregator. We show that these results carry over to the graph domain: GNNs equipped with recurrent aggregators are competitive with state-of-the-art permutation-invariant aggregators on both synthetic benchmarks and real-world problems. However, despite the benefits of recurrent aggregators, their $O(V)$ depth makes them both difficult to parallelise and harder to train on large graphs. Inspired by the observation that a well-behaved aggregator for a GNN is a commutative monoid over its latent space, we propose a framework for constructing learnable, commutative, associative binary operators. With this framework, we construct an aggregator of $O(\log V)$ depth, yielding exponential improvements for both parallelism and dependency length while achieving performance competitive with recurrent aggregators. Based on our empirical observations, our proposed learnable commutative monoid (LCM) aggregator represents a favourable tradeoff between efficient and expressive aggregators.
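The depth argument above can be sketched with a generic balanced-tree reduction (an illustrative helper, not the paper's implementation; in the LCM aggregator the binary operator is a learned network rather than the plain functions used here). Combining $V$ items pairwise, level by level, takes $O(\log V)$ sequential steps instead of the $O(V)$ chain of an RNN-style fold, and any grouping gives the same answer precisely when the operator is associative and commutative, i.e. the commutative-monoid structure.

```python
def tree_reduce(op, items):
    """Reduce a non-empty list with a binary operator in a balanced tree.

    Depth is O(log n) rather than the O(n) of a sequential left fold.
    The result is grouping- and order-independent only if `op` is
    associative and commutative (a commutative monoid over the inputs).
    """
    assert items, "tree_reduce requires at least one item"
    while len(items) > 1:
        # Combine adjacent pairs; carry over the odd element unchanged.
        paired = [op(items[i], items[i + 1]) for i in range(0, len(items) - 1, 2)]
        if len(items) % 2:
            paired.append(items[-1])
        items = paired
    return items[0]

total = tree_reduce(lambda a, b: a + b, [1, 2, 3, 4, 5])  # sum aggregation
peak = tree_reduce(max, [3, 1, 4, 1, 5])                  # max aggregation
```

Both sum and max are commutative monoids, so the balanced grouping is safe; a non-associative operator (e.g. subtraction) would give grouping-dependent results.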
Holographic Reduced Representations (HRR) are a method for performing symbolic AI by associating each vector with an abstract concept and providing mathematical operations to manipulate vectors as if they were classic symbolic objects. This approach has seen little use outside of older symbolic AI work and cognitive science. Our goal is to revisit this approach to determine whether it is viable for enabling hybrid neural-symbolic methods to learn as differentiable components of deep learning architectures. Due to numerical instability, HRRs are ineffective in differentiable solutions today; we address this problem by introducing a projection step that forces the vectors to exist in a well-behaved region of the space. Doing so improves the concept-retrieval efficacy of HRRs by over $100\times$. Using multi-label classification, we demonstrate how the symbolic HRR properties can be leveraged to develop an output layer and loss function that learn effectively, and which allow us to investigate some of the pros and cons of an HRR neuro-symbolic learning approach. Our code can be found at https://github.com/NeuromorphicComputationResearchProgram/Learning-with-Holographic-Reduced-Representations
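A minimal NumPy sketch of the HRR operations described above, assuming the projection step normalises every Fourier component of a vector to unit magnitude (the exact form used in the paper may differ). Binding is circular convolution, and with unit-magnitude spectra, unbinding by the correct key recovers the bound value almost exactly.

```python
import numpy as np

def projection(x):
    """Assumed form of the stabilising projection: force every frequency
    component of x's FFT to unit magnitude, keeping the result real."""
    f = np.fft.fft(x)
    return np.real(np.fft.ifft(f / np.abs(f)))

def bind(a, b):
    """HRR binding: circular convolution, computed in the Fourier domain."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(s, a):
    """Approximate inverse of binding: circular correlation with the key."""
    return np.real(np.fft.ifft(np.fft.fft(s) * np.conj(np.fft.fft(a))))

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
d = 1024
key = projection(rng.standard_normal(d))
value = projection(rng.standard_normal(d))
distractor = projection(rng.standard_normal(d))

s = bind(key, value)
retrieved = unbind(s, key)
sim_value = cosine(retrieved, value)       # near 1: retrieval succeeds
sim_other = cosine(retrieved, distractor)  # near 0: unrelated concept
```

With projected vectors, `unbind(bind(key, value), key)` multiplies the value's spectrum by `|fft(key)|**2 = 1`, which is why retrieval is nearly exact; without the projection this factor varies per frequency and retrieval degrades.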